
Add mistral-common library #1641


Merged: 5 commits merged into huggingface:main on Jul 23, 2025

Conversation

@juliendenize (Contributor) commented Jul 18, 2025

Hi!

We'd like to add vLLM as a library and track the downloads via two possible formats:

  • Transformers with the config.json
  • Mistral with the params.json

The suggested code snippet is for serving the model.

Edit:
Based on @Wauplin's comment, we changed the integration to add the mistral-common library instead.

@Wauplin (Contributor) commented Jul 18, 2025

Hi @juliendenize, thanks for the PR. In the ecosystem we consider vLLM a Local App rather than a library. If you go to https://huggingface.co/mistralai/Mistral-Small-24B-Base-2501?local-app=vllm for instance, you'll see how to serve the model using vLLM. If you are interested in counting the downloads of model repos based on mistral-common, I would suggest registering mistral-common instead. The PR would look like this:

// (to add in alphabetical order)
	"mistral-common": {
		prettyLabel: "mistral-common",
		repoName: "mistral-common",
		repoUrl: "https://github.com/mistralai/mistral-common",
		docsUrl: "https://mistralai.github.io/mistral-common/",
		countDownloads: `path:"config.json" OR path:"params.json"`,
	},

(I haven't added a code snippet as I'm unsure how to use mistral-common.)

You'll also have to add library_name: mistral-common in the model card metadata of your models.

@Wauplin (Contributor) commented Jul 18, 2025

Regarding the PR itself, we won't merge it in its current form. vLLM is a widely used app that is not specific to mistral-common (see https://huggingface.co/models?other=vllm) so we don't want the snippet to include mistral-specific instructions for all of them.

@Wauplin (Contributor) commented Jul 18, 2025

Btw, vLLM snippets are defined here. It is possible to update the install command to include pip install -U mistral-common for models with the mistral-common tag (e.g. if model.tags.includes("mistral-common") ...).
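
A minimal sketch of how that conditional install command could look (illustrative only; the names below are simplified stand-ins, and the real vLLM snippet and ModelData type in huggingface.js carry more fields):

// Hedged sketch: branch the vLLM setup command on the presence of the "mistral-common" tag.
// "ModelData" here is a simplified stand-in for the actual type exported by @huggingface/tasks.
interface ModelData {
	id: string;
	tags: string[];
}

const vllmSetup = (model: ModelData): string =>
	model.tags.includes("mistral-common")
		? ["# Install vLLM and keep mistral-common up to date:", "pip install vllm", "pip install -U mistral-common"].join("\n")
		: ["# Install vLLM from pip:", "pip install vllm"].join("\n");

// Illustrative usage: prints the install snippet for a model tagged with mistral-common.
console.log(vllmSetup({ id: "mistralai/Mistral-Small-24B-Base-2501", tags: ["mistral-common"] }));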

@juliendenize changed the title from "Add vLLM library" to "Add mistral-common library" on Jul 18, 2025
@juliendenize (Contributor, Author) commented:

Hi @Wauplin, thanks a lot for your detailed feedback and for giving me the context and the pointers! Much appreciated.

> so we don't want the snippet to include mistral-specific instructions for all of them.

This was indeed also a concern for me when I pushed the original PR, and it's why I put the Mistral-related instructions in comments. So it's very cool that your proposed solution avoids that.

I updated the PR accordingly.

I just have a question about how tags work: is the library_name considered a tag? Meaning, would this be true:

model.tags.includes("mistral-common") // = true ?

Or should I update the PR to check the library_name?

@Wauplin (Contributor) left a comment

Hey @juliendenize, model.tags.includes("mistral-common") is the correct way!

"# Make sure you have the latest version of mistral-common installed:",
"pip install --upgrade mistral-common"
].join("");
dockerCommand = `# Load and run the model:\ndocker exec -it my_vllm_container bash -c "vllm serve ${model.id} --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice"`;
@Wauplin (Contributor) commented:

Out of curiosity, what happens if one runs docker exec -it my_vllm_container bash -c "vllm serve ${model.id}" instead of having all the Mistral arguments? In general we avoid putting too many options in snippets like this.

@juliendenize (Contributor, Author) commented:

It wouldn't know how to load the model correctly, so it would fail :/

@Wauplin (Contributor) commented:

Wouldn't this be something to fix on the vLLM side directly? It feels weird if all vLLM-compatible models can be correctly served with vllm serve <model id> except the mistral-common ones, which require a very specific command.

(Not against adding it, just wondering why it's not the default.)

@juliendenize (Contributor, Author) commented Jul 23, 2025

Sorry, my previous info was not complete; here is the behavior:

  • if transformers weights exist, vLLM will load the model in the transformers format => not our official way
  • if not, it will fail

So vLLM cannot know how to load the models without specifying the args. And we can't just check whether mistral-common is installed, as it's a hard dependency of vLLM.


export const mistral_common = (model: ModelData): string[] => [
`# We recommend to use vLLM to serve Mistral AI models.
pip install vllm
@Wauplin (Contributor) commented:

Is mistral-common usable "as-is", or only under vLLM?

@juliendenize (Contributor, Author) commented:

mistral-common is a processing library, and we always advise people to use vLLM to serve our models (except for GGUFs). vLLM integrates mistral-common internally and uses it when serving is configured correctly.

@Wauplin (Contributor) left a comment

Looks good, thanks for the clarifications. Before merging, can you make sure some models are listed under https://huggingface.co/models?other=mistral-common? To do that, you need to tag them as such in the model cards (library_name: mistral-common). You can set tags: - vllm separately if you'd like, but vllm shouldn't be the library_name, otherwise the download count rule introduced in this PR will be ignored (since we only take the rule from the "main" library).
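
For reference, a hedged sketch of what the model card front matter could look like under that scheme (the YAML below is illustrative, not taken from an actual Mistral model card):

---
# The "main" library: its countDownloads rule from this PR applies.
library_name: mistral-common
# Optional extra tag, kept separate from library_name.
tags:
  - vllm
---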

juliendenize and others added 2 commits on July 23, 2025 at 13:57 (Co-authored-by: Lucain <lucainp@gmail.com>)
@juliendenize (Contributor, Author) commented:

@Wauplin Should be good! Thanks for the approval and the suggestions.

@Wauplin (Contributor) commented Jul 23, 2025

Hey @juliendenize, thanks for the iterations. One last thing: can you run the linter to fix https://github.com/huggingface/huggingface.js/actions/runs/16469991767/job/46561455759?pr=1641 please?

@juliendenize (Contributor, Author) commented:

> Hey @juliendenize, thanks for the iterations. One last thing: can you run the linter to fix https://github.com/huggingface/huggingface.js/actions/runs/16469991767/job/46561455759?pr=1641 please?

Yeah, sorry about that @Wauplin. I just ran pnpm -r format as indicated in the contribution guidelines 🙏

@Wauplin merged commit c88fdc4 into huggingface:main on Jul 23, 2025 (3 of 4 checks passed)
@Wauplin (Contributor) commented Jul 23, 2025

Merged! Expect a few days before getting it live on the Hub :)
